Machine-learning classifiers can be leveraged as a two-sample statistical test. Suppose each sample is assigned a different label and a classifier achieves a better-than-chance result when discriminating between them. In this case, we can infer that the two samples originate from different populations. However, many types of models, such as neural networks, behave as a black box for the user: they can reject the hypothesis that both samples originate from the same population, but they offer no insight into how the samples differ. Self-Organizing Maps are a dimensionality-reduction technique initially devised as a data visualization tool that displays emergent properties, and they are also useful for classification tasks. Since they can be used as classifiers, they can also be used as a two-sample statistical test. And because their original purpose is visualization, they can additionally offer insights into how the samples differ.
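The classifier two-sample test described above can be illustrated with a minimal sketch (assuming scikit-learn; a generic logistic-regression classifier stands in for the Self-Organizing Map, and the permutation test is one common way to assess "better than chance", not a detail from the abstract).

```python
# Minimal classifier two-sample test (C2ST) sketch.
# A stand-in classifier replaces the Self-Organizing Map from the paper;
# the permutation test below assesses whether accuracy beats chance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def classifier_two_sample_test(sample_a, sample_b, n_permutations=200, seed=0):
    rng = np.random.default_rng(seed)
    X = np.vstack([sample_a, sample_b])
    y = np.concatenate([np.zeros(len(sample_a)), np.ones(len(sample_b))])

    def test_accuracy(labels):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, labels, test_size=0.3, random_state=seed, stratify=labels)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        return clf.score(X_te, y_te)

    observed = test_accuracy(y)
    # Null distribution: shuffle labels so any discrimination is by chance.
    null = [test_accuracy(rng.permutation(y)) for _ in range(n_permutations)]
    p_value = (1 + sum(a >= observed for a in null)) / (1 + n_permutations)
    return observed, p_value

# Toy usage: two Gaussian samples with shifted means.
a = np.random.default_rng(1).normal(0.0, 1.0, size=(200, 5))
b = np.random.default_rng(2).normal(0.5, 1.0, size=(200, 5))
acc, p = classifier_two_sample_test(a, b)
print(f"test accuracy={acc:.2f}, permutation p-value={p:.3f}")
```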
Few-shot (FS) and zero-shot (ZS) learning are two different approaches to scaling temporal action detection (TAD) to new classes. The former adapts a pretrained vision model to a new task represented by as few as a single video per class, whilst the latter requires no training examples by exploiting a semantic description of the new class. In this work, we introduce a new multi-modality few-shot (MMFS) TAD problem, which can be considered a marriage of FS-TAD and ZS-TAD by leveraging few-shot support videos and new class names jointly. To tackle this problem, we further introduce a novel MUlti-modality PromPt mETa-learning (MUPPET) method. This is enabled by efficiently bridging pretrained vision and language models whilst maximally reusing already-learned capacity. Concretely, we construct multi-modal prompts by mapping support videos into the textual token space of a vision-language model using a meta-learned adapter-equipped visual semantics tokenizer. To tackle large intra-class variation, we further design a query feature regulation scheme. Extensive experiments on ActivityNetv1.3 and THUMOS14 demonstrate that our MUPPET outperforms state-of-the-art alternative methods, often by a large margin. We also show that MUPPET can easily be extended to the few-shot object detection problem and again achieves state-of-the-art performance on the MS-COCO dataset. The code will be available at https://github.com/sauradip/MUPPET
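A toy sketch of the core idea, projecting support-video features into a text-token embedding space so visual and textual prompt tokens can be combined, might look as follows (a minimal PyTorch illustration; the module names, dimensions, and the averaging over support clips are assumptions, not the authors' implementation).

```python
# Toy sketch: a "visual semantics tokenizer" that maps support-video features
# into the token-embedding space of a text encoder, so visual and textual
# prompt tokens can be concatenated and fed to a language branch.
import torch
import torch.nn as nn

class VisualSemanticsTokenizer(nn.Module):
    """Projects pooled support-video features into K pseudo text tokens (illustrative)."""
    def __init__(self, video_dim=2048, token_dim=512, num_tokens=4):
        super().__init__()
        self.num_tokens = num_tokens
        # Lightweight adapter: the meta-learned part in the paper; here a plain MLP.
        self.adapter = nn.Sequential(
            nn.Linear(video_dim, token_dim),
            nn.GELU(),
            nn.Linear(token_dim, token_dim * num_tokens),
        )

    def forward(self, support_video_feats):          # (n_support, video_dim)
        pooled = support_video_feats.mean(dim=0)     # average the support set
        tokens = self.adapter(pooled)                # (token_dim * num_tokens,)
        return tokens.view(self.num_tokens, -1)      # (num_tokens, token_dim)

def build_multimodal_prompt(class_name_tokens, visual_tokens):
    """Concatenate class-name token embeddings with visual pseudo tokens."""
    return torch.cat([class_name_tokens, visual_tokens], dim=0)

# Toy usage with random features standing in for backbone outputs.
tokenizer = VisualSemanticsTokenizer()
support_feats = torch.randn(5, 2048)                 # 5 support clips
name_tokens = torch.randn(3, 512)                    # e.g. a 3-token class name
prompt = build_multimodal_prompt(name_tokens, tokenizer(support_feats))
print(prompt.shape)                                  # torch.Size([7, 512])
```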
The recently released EGO4D dataset and benchmark significantly scale up and diversify first-person visual perception data. In EGO4D, the Visual Queries 2D Localization task aims to retrieve past appearances of an object from a first-person-view recording. This task requires a system to spatially and temporally localize the most recent appearance of a given object query, where the query is registered by a single tight visual crop of the object from a different scene. Our study is based on the three-stage baseline introduced in the Episodic Memory benchmark. The baseline solves the problem by detection and tracking: it detects similar objects in all frames and then runs a tracker from the most confident detection result. In the VQ2D challenge, we identified two limitations of the current baseline. (1) The training configuration involves redundant computation. Although the training set contains millions of instances, most of them are repetitive and the number of unique objects is only 14.6k. Repeated gradient computation on the same objects leads to inefficient training. (2) The false-positive rate on background frames is high. This stems from the distribution gap between training and evaluation: during training the model only sees clean, stable, and labeled frames, whereas egocentric videos also contain noisy, blurry, or unlabeled background frames. To this end, we developed a more efficient solution. Specifically, we reduced the training loop from about 15 days to less than 24 hours and achieved 0.17% spatial AP, 31% higher than the baseline. Our solution ranked first on the public leaderboard. Our code is publicly available at https://github.com/facebookresearch/vq2d_cvpr.
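The first fix, removing redundant gradient computation over repeated instances of the same object, can be sketched as below (a hypothetical sampling routine assuming each annotation carries an object identifier; the field names are illustrative, not the EGO4D schema).

```python
# Sketch: build one training epoch that visits each unique object once,
# instead of iterating over millions of near-duplicate instances.
# The 'object_id' field is an assumed identifier, not the exact EGO4D key.
import random
from collections import defaultdict

def unique_object_epoch(annotations, seed=0):
    """annotations: list of dicts, each with an 'object_id' and a frame crop."""
    by_object = defaultdict(list)
    for ann in annotations:
        by_object[ann["object_id"]].append(ann)

    rng = random.Random(seed)
    # One randomly chosen instance per unique object per epoch.
    epoch = [rng.choice(instances) for instances in by_object.values()]
    rng.shuffle(epoch)
    return epoch  # ~14.6k items per epoch instead of millions

# Toy usage: 12 instances covering only 3 unique objects.
anns = [{"object_id": i % 3, "frame": f} for i, f in enumerate(range(12))]
print(len(unique_object_epoch(anns)))  # 3
```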
In recent years, researchers in the field of computational creativity have studied the human creative process, proposing different approaches to reproduce it with formal procedures. In this paper, we present a model for the generation of literary rhymes in Spanish that combines linguistic structures with a neural network model (\textit{Word2Vec}) for semantic assimilation. The results obtained through a manual evaluation of the texts generated by our algorithm are encouraging.
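A minimal sketch of the combination hinted at here, retrieving semantically related Spanish words with Word2Vec and keeping only those that rhyme with a target ending, might look like this (assuming gensim and a crude suffix-based notion of rhyme; the tiny corpus and the rhyme rule are illustrative placeholders, not the paper's resources).

```python
# Sketch: combine Word2Vec semantic neighbours with a crude suffix-based
# rhyme filter. The toy corpus and the "last 3 letters" rhyme rule are
# placeholders for the resources a real system would use.
from gensim.models import Word2Vec

corpus = [
    ["la", "luna", "brilla", "sobre", "la", "laguna"],
    ["el", "corazon", "late", "con", "pasion"],
    ["la", "cancion", "habla", "de", "ilusion", "y", "pasion"],
    ["la", "laguna", "refleja", "la", "luna"],
]

model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, epochs=100, seed=1)

def rhyming_neighbours(word, topn=10, suffix_len=3):
    """Return semantically similar words that share the word's ending."""
    ending = word[-suffix_len:]
    candidates = model.wv.most_similar(word, topn=topn)
    return [(w, score) for w, score in candidates
            if w.endswith(ending) and w != word]

print(rhyming_neighbours("pasion"))
```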
This work aims to evaluate how well probabilistic and state-of-the-art vector space modeling (VSM) methods equip well-known machine learning algorithms to identify social-network documents to be classified as aggressive, gender-biased, or communally charged. To this end, an exploration stage was first performed in order to find the relevant settings to test, i.e., using the training and development samples, we trained multiple algorithms with several vector space modeling and probabilistic methods and discarded the less informative configurations. These systems were submitted to the ComMA@ICON'21 Workshop shared task on multilingual gender-biased and communal language identification.
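The exploration over vector-space representations paired with standard classifiers could be sketched roughly as follows (a scikit-learn illustration; the particular feature extractors, classifiers, toy data, and binary labels are assumptions about what "configurations" means here, not the submitted systems).

```python
# Sketch: grid over a few text representations and classifiers, keeping the
# best configuration on a development split. Texts and labels are toy
# placeholders for the aggressive / gender-biased / communal annotations.
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.metrics import f1_score

train_texts = ["you are awful", "have a nice day", "women cannot drive", "great match"]
train_labels = [1, 0, 1, 0]          # toy binary labels (e.g. aggressive vs. not)
dev_texts = ["terrible people", "lovely weather"]
dev_labels = [1, 0]

vectorizers = {"tfidf": TfidfVectorizer(ngram_range=(1, 2)),
               "counts": CountVectorizer(ngram_range=(1, 2))}
classifiers = {"logreg": LogisticRegression(max_iter=1000),
               "nb": MultinomialNB()}

best = None
for v_name, vec in vectorizers.items():
    for c_name, clf in classifiers.items():
        pipe = make_pipeline(vec, clf).fit(train_texts, train_labels)
        score = f1_score(dev_labels, pipe.predict(dev_texts))
        if best is None or score > best[0]:
            best = (score, v_name, c_name)

print("best configuration:", best)
```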
Recently, few-shot video classification has received growing interest. Current approaches mostly focus on effectively exploiting the temporal dimension in videos to improve learning under low-data regimes. However, most works largely ignore that videos are often accompanied by rich textual descriptions, which can also be an essential source of information for handling few-shot recognition cases. In this paper, we propose to leverage these human-provided textual descriptions as privileged information when training a few-shot video classification model. Specifically, we formulate a text-based task conditioner to adapt video features to the few-shot learning task. Furthermore, our model follows a transductive setting, improving its task-adaptation ability by using the support textual descriptions and query instances to update a set of class prototypes. Our model achieves state-of-the-art performance on four challenging benchmarks commonly used to evaluate few-shot video action classification models.
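A toy sketch of the prototype-based idea, fusing class prototypes with text embeddings of the support descriptions and then refining them transductively with unlabeled query features, might look as below (a minimal PyTorch illustration; the averaging fusion and the soft-assignment refinement step are assumptions, not the authors' exact scheme).

```python
# Toy sketch: class prototypes from support video features, fused with text
# embeddings of the support descriptions, then refined with unlabeled query
# features (transductive step). The fusion/refinement rules are illustrative.
import torch
import torch.nn.functional as F

def build_prototypes(support_feats, support_labels, text_feats, n_classes):
    """support_feats: (n_support, d); text_feats: (n_classes, d) description embeddings."""
    protos = torch.stack([support_feats[support_labels == c].mean(0)
                          for c in range(n_classes)])
    return 0.5 * protos + 0.5 * text_feats            # simple fusion (assumed)

def transductive_refine(protos, query_feats, steps=3):
    for _ in range(steps):
        # Soft-assign queries to prototypes, then move prototypes toward
        # the weighted mean of their assigned queries.
        logits = -torch.cdist(query_feats, protos)    # (n_query, n_classes)
        weights = F.softmax(logits, dim=1)            # responsibilities
        weighted = weights.t() @ query_feats          # (n_classes, d)
        counts = weights.sum(0, keepdim=True).t()     # (n_classes, 1)
        protos = 0.5 * protos + 0.5 * weighted / counts.clamp(min=1e-6)
    return protos

# Toy usage: 2 classes, 3 support clips each, 4 queries, feature dim 8.
feats = torch.randn(6, 8); labels = torch.tensor([0, 0, 0, 1, 1, 1])
text = torch.randn(2, 8); queries = torch.randn(4, 8)
protos = transductive_refine(build_prototypes(feats, labels, text, 2), queries)
print(protos.shape)  # torch.Size([2, 8])
```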